

Search for: All records

Creators/Authors contains: "Arefeen, Yamin"


  1. Purpose: Magnetic Resonance Imaging (MRI) enables non-invasive assessment of brain abnormalities during early life development. Permanent-magnet scanners operating in the neonatal intensive care unit (NICU) facilitate MRI of sick infants, but have long scan times due to lower signal-to-noise ratio (SNR) and limited receive coils. This work accelerates in-NICU MRI with diffusion probabilistic generative models by developing a training pipeline that accounts for these challenges.
     Methods: We establish a novel training dataset of clinical 1 Tesla neonatal MR images in collaboration with Aspect Imaging and Sha'are Zedek Medical Center. We propose a pipeline to handle the low quantity and SNR of our real-world dataset by (1) modifying existing network architectures to support varying resolutions; (2) training a single model on all data with learned class embedding vectors; (3) applying self-supervised denoising before training; and (4) reconstructing by averaging posterior samples. Retrospective under-sampling experiments, accounting for signal decay, evaluated each item of the proposed methodology. A clinical reader study with practicing pediatric neuroradiologists evaluated the proposed images reconstructed from under-sampled data.
     Results: Combining all data, denoising pre-training, and averaging posterior samples yields quantitative improvements in reconstruction. The generative model decouples the learned prior from the measurement model and functions at two acceleration rates without re-training. The reader study suggests that the proposed images reconstructed from under-sampled data are adequate for clinical use.
     Conclusion: Diffusion probabilistic generative models, applied with the proposed pipeline to handle challenging real-world datasets, could reduce the scan time of in-NICU neonatal MRI.
    Free, publicly-accessible full text available June 17, 2026
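The "reconstructing by averaging posterior samples" step in item 1 can be sketched independently of any particular diffusion model. In this illustrative sketch, a stand-in sampler (hypothetical; a real pipeline would draw conditional samples from the trained diffusion model given the under-sampled k-space) plays the role of the posterior, and averaging independent draws reduces per-pixel variance:

```python
import numpy as np

def reconstruct_by_averaging(sample_posterior, n_samples=8):
    """Average independent posterior samples to form the final reconstruction."""
    samples = np.stack([sample_posterior() for _ in range(n_samples)])
    return samples.mean(axis=0)

# Stand-in sampler: true image plus per-sample noise. In the actual method,
# each call would be one posterior sample from the diffusion model.
rng = np.random.default_rng(0)
truth = np.ones((16, 16))
sampler = lambda: truth + 0.3 * rng.standard_normal(truth.shape)

recon_single = sampler()
recon_avg = reconstruct_by_averaging(sampler, n_samples=16)
err_single = np.sqrt(np.mean((recon_single - truth) ** 2))
err_avg = np.sqrt(np.mean((recon_avg - truth) ** 2))
```

Averaging N independent samples shrinks the random component of the error roughly by a factor of sqrt(N), which is why the abstract reports quantitative improvements from this step.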
  2. Free, publicly-accessible full text available May 10, 2026
  3. Purpose: To examine the effect of incorporating self-supervised denoising as a pre-processing step for training deep learning (DL) based reconstruction methods on data corrupted by Gaussian noise. K-space data employed for training are typically multi-coil and inherently noisy. Although DL-based reconstruction methods trained on fully sampled data can enable high reconstruction quality, obtaining large, noise-free datasets is impractical.
     Methods: We leverage Generalized Stein's Unbiased Risk Estimate (GSURE) for denoising. We evaluate two DL-based reconstruction methods, Diffusion Probabilistic Models (DPMs) and Model-Based Deep Learning (MoDL), and assess the impact of denoising on their performance in solving accelerated multi-coil magnetic resonance imaging (MRI) reconstruction. The experiments were carried out on T2-weighted brain and fat-suppressed proton-density knee scans.
     Results: Self-supervised denoising enhances the quality and efficiency of MRI reconstructions across various scenarios. Specifically, employing denoised images rather than noisy counterparts when training DL networks results in lower normalized root mean squared error (NRMSE) and higher structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) across different SNR levels, including 32, 22, and 12 dB for T2-weighted brain data and 24, 14, and 4 dB for fat-suppressed knee data.
     Conclusion: Denoising is an essential pre-processing technique capable of improving the efficacy of DL-based MRI reconstruction methods under diverse conditions. By refining the quality of input data, denoising enables training more effective DL networks, potentially bypassing the need for noise-free reference MRI scans.
    Free, publicly-accessible full text available June 2, 2026
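The core idea behind SURE-based denoising in item 3 is that, under Gaussian noise, the mean squared error to the unseen clean image can be estimated from the noisy data alone. This is a minimal Monte Carlo SURE sketch for a scalar shrinkage denoiser; GSURE for multi-coil k-space generalizes this idea, and all names and parameters here are illustrative, not the paper's implementation:

```python
import numpy as np

def mc_sure(denoiser, y, sigma, rng, eps=1e-3):
    """Monte Carlo SURE: unbiased estimate of MSE to the clean signal,
    computed from the noisy observation y alone (Gaussian noise, std sigma).
    The divergence term is estimated with a random probe vector."""
    n = y.size
    fy = denoiser(y)
    b = rng.standard_normal(y.shape)                       # random probe
    div = (b * (denoiser(y + eps * b) - fy)).sum() / eps   # divergence estimate
    return ((fy - y) ** 2).sum() / n - sigma**2 + (2 * sigma**2 / n) * div

# Toy use: tune the weight of a shrinkage denoiser f(y) = w * y by
# minimizing SURE, never touching the clean signal x.
rng = np.random.default_rng(1)
x = rng.standard_normal(4096)                  # clean signal (unseen by SURE)
sigma = 0.5
y = x + sigma * rng.standard_normal(x.shape)   # noisy observation
weights = np.linspace(0.1, 1.0, 10)
sure_vals = [mc_sure(lambda z, w=w: w * z, y, sigma, rng) for w in weights]
best_w = weights[int(np.argmin(sure_vals))]
```

For this toy setup the risk-optimal shrinkage is Var(x) / (Var(x) + sigma^2) = 0.8, and minimizing the SURE estimate selects a weight near it without ever seeing `x`, which is exactly the self-supervised property the abstract relies on.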
  4. Free, publicly-accessible full text available May 10, 2026
  5. Implicit Neural Representations (INRs) are a learning-based approach to accelerate Magnetic Resonance Imaging (MRI) acquisitions, particularly in scan-specific settings when only data from the under-sampled scan itself are available. Previous work has shown that INRs improve rapid MRI through inherent regularization imposed by neural network architectures. Typically parameterized by fully connected neural networks, INRs provide continuous image representations by mapping a physical coordinate location to its intensity. Prior approaches have applied unlearned regularization priors during INR training and were limited to 2D or low-resolution 3D acquisitions. Meanwhile, diffusion-based generative models have recently gained attention for learning powerful image priors independent of the measurement model. This work proposes INFusion, a technique that regularizes INR optimization from under-sampled MR measurements using pre-trained diffusion models to enhance reconstruction quality. In addition, a hybrid 3D approach is introduced, enabling INR application on large-scale 3D MR datasets. Experimental results show that in 2D settings, diffusion regularization improves INR training, while in 3D, it enables feasible INR training on matrix sizes of 256 × 256 × 80. 
    Free, publicly-accessible full text available December 9, 2025
  6. Motivation: We explore the “Implicit Data Crime” of datasets whose subsampled k-space is filled in using parallel imaging. These datasets are treated as fully sampled, but their points derive from (1) prospective sampling and (2) reconstruction of un-sampled points, which creates artificial data correlations at low SNR or high acceleration.
     Goal(s): How will downstream tasks, including reconstruction algorithm comparison and optimal trajectory design, be biased by the effects of parallel imaging on a prospectively undersampled dataset?
     Approach: We compare reconstruction performance using fully sampled data against data completed with the SENSE algorithm.
     Results: Using parallel-imaging-filled k-space biases downstream perception of algorithm performance.
     Impact: This study demonstrates an overly optimistic bias that results from using k-space filled in with parallel imaging as ground-truth data. Researchers should be aware of this possibility and carefully examine the computational pipeline behind the datasets they use.
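The bias described in item 6 can be made concrete with a toy 1-D experiment: when the "ground-truth" reference was itself completed by an interpolation-style operator, a reconstruction sharing that operator scores deceptively well against it while still erring against the truth. Everything below is a hedged illustration (simple neighbor averaging stands in for SENSE; no coil sensitivities are modeled):

```python
import numpy as np

rng = np.random.default_rng(0)

def nrmse(est, ref):
    return np.linalg.norm(est - ref) / np.linalg.norm(ref)

# True 1-D "image" and its noisy k-space.
n = 128
t = np.linspace(0, 1, n, endpoint=False)
truth = np.exp(-((t - 0.5) ** 2) / 0.02)
kspace = (np.fft.fft(truth)
          + 2.0 * rng.standard_normal(n)
          + 2.0j * rng.standard_normal(n))

# "Parallel-imaging-completed" reference: keep even k-space lines, fill odd
# lines by averaging their sampled neighbors (a crude stand-in for SENSE).
filled = kspace.copy()
filled[1::2] = 0.5 * (kspace[0::2] + np.roll(kspace, -2)[0::2])
reference = np.fft.ifft(filled).real

# A reconstruction that applies the same completion scores perfectly against
# the filled reference, yet still has nonzero error against the truth.
recon = reference.copy()
apparent_err = nrmse(recon, reference)  # 0: shares the reference's artifacts
true_err = nrmse(recon, truth)          # > 0: the bias the study warns about
```

The gap between `apparent_err` and `true_err` is exactly the overly optimistic bias the abstract reports: the filled reference and the reconstruction share correlated errors, so the apparent metric understates the true one.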
  7. Motivation: Publicly available k-space data used for training are inherently noisy, with no ground truth available.
     Goal(s): To denoise k-space data in an unsupervised manner for downstream applications.
     Approach: We apply Generalized Stein’s Unbiased Risk Estimate (GSURE) to multi-coil MRI to denoise images without access to ground truth. Subsequently, we train a generative model to show improved accelerated MRI reconstruction.
     Results: We demonstrate that (1) GSURE can successfully remove noise from k-space; (2) generative priors learned on GSURE-denoised samples produce realistic synthetic samples; and (3) reconstruction performance on subsampled MRI improves when priors are trained on denoised images rather than on noisy samples.
     Impact: We can denoise multi-coil data without ground truth and train deep generative models directly on noisy k-space in an unsupervised manner, for improved accelerated reconstruction.
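For context on the multi-coil data that items 3 and 7 denoise: a common baseline pipeline inverse-FFTs each coil's k-space and combines the coil images by root sum of squares (RSS); a GSURE-style denoiser would act on the per-coil data before or in place of such a naive combine. A minimal sketch with synthetic data (shapes and noise are illustrative):

```python
import numpy as np

def rss_combine(multicoil_kspace):
    """Inverse FFT each coil, then root-sum-of-squares coil combination."""
    coil_images = np.fft.ifft2(multicoil_kspace, axes=(-2, -1))
    return np.sqrt((np.abs(coil_images) ** 2).sum(axis=0))

# Synthetic stand-in for noisy multi-coil k-space (8 coils, 32x32 matrix).
rng = np.random.default_rng(0)
ncoils, nx, ny = 8, 32, 32
kspace = (rng.standard_normal((ncoils, nx, ny))
          + 1j * rng.standard_normal((ncoils, nx, ny)))
img = rss_combine(kspace)
```

Because RSS takes magnitudes, noise in the coil images does not cancel in the combine, which is one reason denoising the k-space itself, as these abstracts propose, pays off downstream.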